Updated: 1 Dec. 2019 [See Addendum below.]
November's blog, like all recent ones, is short. Perhaps the oldster (me) has finally learned that shorter is better, or is it due to the neuronal changes of normal aging?
The idea for the blog was initially stimulated by an article, Artificial Intelligence: A Primer for the Laboratory Leader, in CSMLS's LabBuzz, Nov. 22 (see Further Reading). Naturally, this led me to read many more AI articles, some of which are included in Further Reading below.
The title derives from a ditty composed and sung by Johnny Nash.
INTRODUCTION
As someone whose career was marked by many dramatic changes, I'm interested in what the 'next big thing' is. One candidate is artificial intelligence (AI).
I was particularly struck by the authors' choice, in Artificial Intelligence: A Primer for the Laboratory Leader, of six 'Roles of Laboratory Managers in the Post-AI Laboratory'. See the article for a description of the outcomes of each role, or see the screenshot from the article:
To me, many of these roles exist in the pre-AI lab and may be fulfilled by the lab manager or medical director, depending on the laboratory. The authors mention a quote attributed to the Greek philosopher Heraclitus, who lived ~500 BC:
- "Change is the only constant in life."
Authors' learning points: Welcome all change; it's inevitable and will take us to a better and brighter future. Think 'robots are coming to help us,' not to take our jobs.
Fair enough. Change is inevitable. Not sure it's always good, though, as many technological changes are a mixed bag of pros and cons.
Sidebar: Must admit that the robot comment reminds me of Reagan's "I'm from the government and I'm here to help", a late-1970s cliché. Reagan was the less-government POTUS who believed in trickle-down economics: tax breaks and benefits for corporations and the wealthy will trickle down to everyone else. Except the theory didn't work well. Reagan also opted to end federal funding for mental health programs to cut the budget. The consequences of Reagan's social policy? ~One-third of the USA's homeless suffer from severe mental illness, which puts a burden on police departments, hospitals and the penal system.
To me, a more apt cliché is one prevalent in the 1990s in Alberta, Canada, when government health care cuts and restructuring decimated the laboratory and broader health system. The government hired consultants to do the dirty work, then leave. Many in the lab community called them 'suits.' (See Further Reading)
- "I'm a consultant and I'm here to help."
Managerial roles pre-AI often include the manager performing the following functions:
- Assume leadership, which includes motivating staff to achieve a common goal and being a role model for key qualities like dedication and integrity;
- Communicate to lab staff and beyond the lab;
- Delegate responsibilities to staff;
- Manage projects and budgets;
- Organise and chair meetings;
- Comply with mandatory laboratory regulations;
- Maintain current best practices;
- Manage conflicts in the workplace;
- Manage conflicting priorities;
- Manage workplace diversity (inter-generational, ethnic, cultural);
- Solve problems ranging from technical to human resources issues;
- Develop staff skills, including CE/CPD opportunities;
- Recruit and retain talent;
- Maintain a safe workplace.
So can I assume that the six 'Post-AI Laboratory Roles' are just add-ons, more or less minor tweaks to what today's managers already do, rather than a revolutionary change? Are artificial intelligence and machine learning that big a deal? Will they consume a manager's time as the be-all and end-all? Or are they just one of many changes that laboratory professionals have adapted to over the decades? Are AI roles more critical than traditional managerial roles? You tell me.
As always, comments are most welcome. See below.
Addendum
My reply to Anonymous, who comments below, "A huge concern I have centres around the data chosen for algorithms used for AI decisions" and mentions two books:
- Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil (2016). I've linked it to a book review. The reviewer writes:
- "The core theme of the book is to dispel a widely held misconception of mathematical models and their results being fair, objective and unbiased."
- "She believes that these models are here to stay but they need to be used with caution; appropriate regulation is required to ensure that humans are not treated simply as collateral damage for the sake of efficiency."
- Talk by O'Neil on Weapons of Math Destruction and algorithms (12:15 min. video). I started it part way through.
- Machines Like Me by Ian McEwan (2019). A reviewer writes:
- "The book touches on many themes: ...artificial intelligence (AI), ...but its real subject is moral choice."
- "The epigraph quotes Rudyard Kipling's poem “The Secret of the Machines”, which presciently expresses the uncompromising quality of the machine mind. “We are not built to comprehend a lie,” the poem goes."
- "In Adam’s digital brain [he's a robot], there may be fuzzy logic, but there’s no fuzzy morality. This clarity gives him an inhuman iciness."
FOR FUN
I chose a 1972 song by Johnny Nash (who often collaborated with Jamaica's Bob Marley) and admit it's somewhat tongue-in-cheek, as I'm skeptical of AI's use in medicine, including laboratory medicine and transfusion. I admit AI has much promise, but it has yet to deliver due to obstacles (see Artificial intelligence and digital pathology: challenges and opportunities, Further Reading).
- I can see clearly now (Johnny Nash)
FURTHER READING
Artificial intelligence: a primer for the laboratory leader (18 Nov. 2019)
AI can help labs manage data to improve stewardship. New artificial intelligence technologies improve patient care and lower laboratory costs (21 Nov. 2019)
8 Management skills you need to be a laboratory manager (10 Mar. 2019)
For pathologists:
Tizhoosh HR, Pantanowitz L. Artificial intelligence and digital pathology: challenges and opportunities. J Pathol Inform. 2018 Nov 14;9:38.
Making artificial intelligence real in pathology and lab medicine (Pathology Chair's blog, Lydia Howell, MD, 1 Feb. 2018)
COMMENTS
Another great blog, Blut!
A huge concern I have centres around the data chosen for algorithms used for AI decisions.
e.g. just ignore that aberrant result because it occurs so rarely.
Further reading: "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy" by Cathy O'Neil, 2016
"Machines Like Me" by Ian McEwan, 2019
Thanks, Anonymous. I'll reply fully in the body of the blog.
Cheers, Pat